73 research outputs found

    A quantitative probabilistic investigation into the accumulation of rounding errors in numerical ODE solution.

    We examine numerical rounding errors of some deterministic solvers for systems of ordinary differential equations (ODEs) from a probabilistic viewpoint. We show that the accumulation of rounding errors results in a solution which is inherently random and we obtain the theoretical distribution of the trajectory as a function of time, the step size and the numerical precision of the computer. We consider, in particular, systems which amplify the effect of the rounding errors so that over long time periods the solutions exhibit divergent behaviour. By performing multiple repetitions with different values of the time step size, we observe numerically the random distributions predicted theoretically. We mainly focus on the explicit Euler and fourth order Runge–Kutta methods but also briefly consider more complex algorithms such as the implicit solvers VODE and RADAU5 in order to demonstrate that the observed effects are not specific to a particular method.
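    As a rough illustration of the kind of experiment described above, the sketch below runs the explicit Euler method in single and double precision on a toy problem and records the end-point discrepancy over many nearby step sizes; the test problem, step sizes, and precisions are illustrative assumptions, not the systems or settings studied in the paper.

```python
# Minimal sketch (assumed toy setup, not the paper's code): observe rounding-error
# accumulation in the explicit Euler method by running the same integration in
# single and double precision and comparing the end points over many step sizes.
import numpy as np

def euler(f, x0, t_end, h, dtype):
    """Explicit Euler in a fixed floating-point precision."""
    x = dtype(x0)
    h = dtype(h)
    n_steps = int(round(t_end / float(h)))
    for _ in range(n_steps):
        x = x + h * f(x)  # every step commits a new rounding error
    return x

f = lambda x: x            # toy problem dx/dt = x, exact solution exp(t)
t_end = 5.0
discrepancies = []
for k in range(100):       # many nearby step sizes, as in the repeated runs above
    h = 1.0e-3 * (1.0 + 1.0e-6 * k)
    x32 = euler(f, 1.0, t_end, h, np.float32)   # single precision
    x64 = euler(f, 1.0, t_end, h, np.float64)   # double precision reference
    # the truncation error is essentially identical, so the difference is rounding-dominated
    discrepancies.append(float(x32) - float(x64))

print("mean:", np.mean(discrepancies), "std:", np.std(discrepancies))
```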

    Predicting Power Conversion Efficiency of Organic Photovoltaics: Models and Data Analysis.

    Funders: Cambridge Trust; National Research Foundation Singapore; Alexander von Humboldt-Stiftung; China Scholarship Council.
    In this paper, the ability of three selected machine learning neural models and three baseline models to predict the power conversion efficiency (PCE) of organic photovoltaics (OPVs) from molecular structure information is assessed. The bidirectional long short-term memory (gFSI/BiLSTM), attentive fingerprints (attentive FP), and simple graph neural network (simple GNN) models, as well as the baseline support vector regression (SVR), random forest (RF), and high-dimensional model representation (HDMR) methods, are trained on both the large computational Harvard Clean Energy Project database (CEPDB) and the much smaller experimental Harvard Organic Photovoltaic 15 dataset (HOPV15). It was found that the neural models generally performed better on the computational dataset, with the attentive FP model reaching state-of-the-art performance with a test-set mean squared error of 0.071. The experimental dataset proved much harder to fit, with all of the models exhibiting rather poor performance. Contrary to the computational dataset, the baseline models were found to perform better than the neural models. To improve the ability of machine learning models to predict PCEs for OPVs, either better computational results that correlate well with experiments or more experimental data obtained under well-controlled conditions are likely required.
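    For orientation, the sketch below shows what one of the baseline pipelines could look like in outline: support vector regression of PCE against Morgan fingerprints computed from SMILES strings. The SMILES, PCE values, and hyperparameters are placeholder assumptions; the actual study trains on CEPDB and HOPV15, not on this toy data.

```python
# Illustrative baseline sketch (assumptions, not the study's actual pipeline):
# SVR regression of PCE against Morgan fingerprints derived from SMILES strings.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.svm import SVR

def fingerprint(smiles, n_bits=2048):
    """Encode a molecule as a Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Placeholder (SMILES, PCE in %) pairs; real work would use CEPDB / HOPV15 entries.
toy_data = [("c1ccccc1", 1.2), ("c1ccc2ccccc2c1", 2.3),
            ("c1ccsc1", 0.8), ("C1=CC=CN=C1", 1.7)]
X = np.array([fingerprint(s) for s, _ in toy_data])
y = np.array([pce for _, pce in toy_data])

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)  # hyperparameters are illustrative
model.fit(X, y)
print(model.predict(X[:2]))
```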

    Comment on “A spherical cavity model for quadrupolar dielectrics” [J. Chem. Phys. 144, 114502 (2016)]

    The dielectric properties of a fluid composed of molecules possessing both dipole and quadrupole moments are studied based on a model of the Onsager type (molecule in the centre of a spherical cavity). The dielectric permittivity ε and the macroscopic quadrupole polarizability αQ of the fluid are related to the basic molecular characteristics (molecular dipole, polarizability, quadrupole, quadrupolarizability). The effect of αQ is to increase the reaction field, to bring forth a reaction field gradient, to decrease the cavity field, and to bring forth a cavity field gradient. The effects from the quadrupole terms are significant in the case of small cavity size in a non-polar liquid. The quadrupoles in the medium are shown to have a small but measurable effect on the dielectric permittivity of several liquids (Ar, Kr, Xe, CH4, N2, CO2, CS2, C6H6, H2O, CH3OH). The theory is used to calculate the macroscopic quadrupolarizabilities of these fluids as functions of pressure and temperature. The cavity radii are also determined for these liquids, and it is shown that they are functions of density only. This extension of Onsager's theory will be important for non-polar solutions (fuel, crude oil, liquid CO2), especially at increased pressures.
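    For context, the dipole-only limit that this cavity model generalizes is the classical Onsager relation, reproduced below for orientation (the quadrupolar terms in αQ derived in the paper are not shown); N is the number density, μ the molecular dipole moment, n the refractive index, and T the temperature.

```latex
% Classical Onsager relation (dipole-only limit of the spherical-cavity model);
% the paper's extension adds quadrupole and quadrupolarizability contributions.
\begin{equation}
  \frac{(\varepsilon - n^{2})(2\varepsilon + n^{2})}{\varepsilon\,(n^{2} + 2)^{2}}
  = \frac{N\mu^{2}}{9\,\varepsilon_{0} k_{\mathrm{B}} T}
\end{equation}
```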

    Modelling TiO2 formation in a stagnation flame using method of moments with interpolative closure

    The stagnation flame synthesis of titanium dioxide nanoparticles from titanium tetraisopropoxide (TTIP) is modelled based on a simple one-step decomposition mechanism and one-dimensional stagnation flow. The particle model, which accounts for nucleation, surface growth, and coagulation, is fully coupled to the flow and the gas phase chemistry and solved using the method of moments with interpolative closure (MoMIC). The model assumes no formation of aggregates, considering the high temperature of the flame. In order to account for the free-jet region in the flow, the computational distance, H = 1.27 cm, is chosen based on the observed flame location in the experiment (for nozzle-stagnation distance, L = 3.4 cm). The model shows good agreement with the experimentally measured mobility particle size for a stationary stagnation surface with varying TTIP loading, although the particle geometric standard deviation, GSD, is underpredicted for high TTIP loading. The particle size is predicted to be sensitive to the sampling location near the stagnation surface in the modelled flame. The sensitivity to the sampling location is found to increase with increasing precursor loading and stagnation temperature. Lastly, the effect of surface growth is evaluated by comparing the result with an alternative reaction model. It is found that surface growth plays an important role in the initial stage of particle growth which, if neglected, results in severe underprediction of the particle size and overprediction of the particle GSD.
    NRF (National Research Foundation, Singapore). Accepted version.
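    The interpolative-closure step that gives MoMIC its name can be illustrated in a few lines: fractional-order moments required by the coagulation and growth source terms are estimated by Lagrange interpolation of the logarithms of the whole-order moments. The moment values below are placeholders, and the sketch omits the reduced-moment normalization and negative-order extrapolation used in full MoMIC implementations.

```python
# Minimal sketch of interpolative closure (assumed simplification, not the
# paper's solver): estimate a fractional-order moment M_p of the particle size
# distribution by Lagrange interpolation of log10(M_k) through the integer
# moments M_0 .. M_K.
import numpy as np

def fractional_moment(whole_moments, p):
    """Interpolate log10(M_k) at fractional order p with a Lagrange polynomial."""
    orders = np.arange(len(whole_moments), dtype=float)   # k = 0, 1, ..., K
    log_m = np.log10(np.asarray(whole_moments, dtype=float))
    log_mp = 0.0
    for i in range(len(orders)):
        term = log_m[i]
        for j in range(len(orders)):
            if j != i:
                term *= (p - orders[j]) / (orders[i] - orders[j])
        log_mp += term
    return 10.0 ** log_mp

# Placeholder whole-order moments M0, M1, M2 (arbitrary units)
M = [1.0e10, 5.0e11, 3.0e13]
print(fractional_moment(M, 0.5))   # e.g. M_1/2, as appears in coagulation terms
```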

    Knowledge Engineering in Chemistry: From Expert Systems to Agents of Creation.

    Passing knowledge from human to human is a natural process that has continued since the beginning of humankind. Over the past few decades, we have witnessed that knowledge is no longer passed only between humans but also from humans to machines. The latter form of knowledge transfer represents a cornerstone in artificial intelligence (AI) and lays the foundation for knowledge engineering (KE). In order to pass knowledge to machines, humans need to structure, formalize, and make knowledge machine-readable. Subsequently, humans also need to develop software that emulates their decision-making process. In order to engineer chemical knowledge, chemists are often required to challenge their understanding of chemistry and thinking processes, which may help improve the structure of chemical knowledge. Knowledge engineering in chemistry dates from the development of expert systems that emulated the thinking process of analytical and organic chemists. Since then, many different expert systems employing rather limited knowledge bases have been developed, solving problems in retrosynthesis, analytical chemistry, chemical risk assessment, etc. However, toward the end of the 20th century, the AI winters slowed down the development of expert systems for chemistry. At the same time, the increasing complexity of chemical research, alongside the limitations of the available computing tools, made it difficult for many chemistry expert systems to keep pace. In the past two decades, the semantic web, the popularization of object-oriented programming, and the increase in computational power have revitalized knowledge engineering. Knowledge formalization through ontologies has become commonplace, triggering the subsequent development of knowledge graphs and cognitive software agents. These tools enable interoperability, the representation of more complex systems, inference capabilities, and the synthesis of new knowledge. This Account introduces the history and core principles of KE and its applications within the broad realm of chemical research and engineering. In this regard, we first discuss how chemical knowledge is formalized and how a chemist's cognition can be emulated with the help of reasoning algorithms. Following this, we discuss various applications of knowledge graph and agent technology used to solve problems in chemistry related to molecular engineering, chemical mechanisms, multiscale modeling, automation of calculations and experiments, and chemist-machine interactions. These developments are discussed in the context of a universal and dynamic knowledge ecosystem, referred to as The World Avatar (TWA).
    This research was supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. AK and MK thank the Humboldt Foundation (Berlin, Germany) and the Isaac Newton Trust (Cambridge, UK) for a Feodor Lynen Fellowship. JB acknowledges financial support provided by a CSC Cambridge International Scholarship from the Cambridge Trust and the China Scholarship Council. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
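    As a small, concrete illustration of making chemical knowledge machine-readable, the sketch below encodes a single statement about benzene as RDF triples with rdflib; the namespace, class, and property names are invented for the example and are not TWA's actual ontologies.

```python
# Toy sketch (assumptions, not TWA's actual ontologies): chemical knowledge
# expressed as RDF triples, the representation underlying knowledge graphs.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

EX = Namespace("http://example.org/chemistry#")   # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# "Benzene is a chemical species with molecular formula C6H6."
g.add((EX.Benzene, RDF.type, EX.ChemicalSpecies))
g.add((EX.Benzene, RDFS.label, Literal("benzene", lang="en")))
g.add((EX.Benzene, EX.hasMolecularFormula, Literal("C6H6", datatype=XSD.string)))

print(g.serialize(format="turtle"))
```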